
    Toward a social psychophysics of face communication

    As a highly social species, humans are equipped with a powerful tool for social communication—the face, which can elicit multiple social perceptions in others due to the rich and complex variations of its movements, morphology, and complexion. Consequently, identifying precisely what face information elicits different social perceptions is a complex empirical challenge that has largely remained beyond the reach of traditional research methods. More recently, the emerging field of social psychophysics has developed new methods designed to address this challenge. Here, we introduce and review the foundational methodological developments of social psychophysics, present recent work that has advanced our understanding of the face as a tool for social communication, and discuss the main challenges that lie ahead.
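
    A foundational method reviews of this field typically cover is reverse correlation: an observer categorizes many randomly perturbed stimuli, and averaging the perturbations by response reveals the information driving the judgment. A minimal Python sketch follows; the simulated observer, image size, trial count, and the binary judgment are illustrative assumptions, not details from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n_trials, size = 2000, 64

    # Random pixel-noise stimuli (in practice, noise is added to a base face).
    stimuli = rng.standard_normal((n_trials, size, size))

    # Simulated observer: says "yes" when a particular face region is bright.
    template = np.zeros((size, size))
    template[20:30, 24:40] = 1.0
    responses = (stimuli * template).sum(axis=(1, 2)) > 0

    # Classification image: mean noise on "yes" trials minus mean on "no" trials;
    # it recovers the region the observer's judgment actually depends on.
    ci = stimuli[responses].mean(axis=0) - stimuli[~responses].mean(axis=0)
    print("peak of classification image:", np.unravel_index(ci.argmax(), ci.shape))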

    Conditions for Viewpoint Dependent Face Recognition

    Poggio and Vetter (1992) showed that learning one view of a bilaterally symmetric object can be sufficient for its recognition, provided this view allows the computation of a symmetric, "virtual" view. Faces are roughly bilaterally symmetric objects. Learning a side view, which always has a symmetric counterpart, should therefore allow better generalization performance than learning the frontal view. Two psychophysical experiments tested these predictions. Stimuli were views of shaded 3D models of laser-scanned faces. The first experiment tested whether a particular view of a face was canonical. The second experiment tested which single views of a face give rise to the best generalization performance. The results were compatible with the symmetry hypothesis: learning a side view allowed better generalization performance than learning the frontal view.
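
    The symmetry argument can be made concrete in a few lines of code. The sketch below, under the simplifying assumption that images are perfectly registered, shows that mirroring a side view produces a genuinely new ("virtual") view, whereas mirroring a perfectly symmetric frontal view returns the same image; the arrays are random stand-ins for face images.

    import numpy as np

    rng = np.random.default_rng(0)

    def virtual_view(view):
        """Mirror an image about its vertical axis (left-right flip)."""
        return view[:, ::-1]

    side_view = rng.random((128, 128))        # stand-in for a 3/4 profile view
    im = rng.random((128, 128))
    frontal_view = (im + im[:, ::-1]) / 2     # perfectly symmetric frontal view

    print(np.allclose(virtual_view(side_view), side_view))        # False: a new view
    print(np.allclose(virtual_view(frontal_view), frontal_view))  # True: nothing new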

    Space-by-time non-negative matrix factorization for single-trial decoding of M/EEG activity

    We develop a novel methodology for the single-trial analysis of multichannel time-varying neuroimaging signals. We introduce the space-by-time M/EEG decomposition, based on Non-negative Matrix Factorization (NMF), which describes single-trial M/EEG signals using a set of non-negative spatial and temporal components that are linearly combined with signed scalar activation coefficients. We illustrate the effectiveness of the proposed approach on an EEG dataset recorded during the performance of a visual categorization task. Our method extracts three temporal and two spatial functional components, achieving a compact yet full representation of the underlying structure, which validates and succinctly summarizes results from previous studies. Furthermore, we introduce a decoding analysis that determines the distinct functional role of each component and relates it to experimental conditions and task parameters. In particular, we demonstrate that the presented stimulus and the task difficulty of each trial can be reliably decoded using specific combinations of components from the identified space-by-time representation. When compared with a sliding-window linear discriminant algorithm, our approach yields more robust decoding performance across participants. Overall, our findings suggest that the proposed space-by-time decomposition is a meaningful low-dimensional representation that carries the relevant information of single-trial M/EEG signals.
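
    As a rough illustration of the decomposition, the sketch below approximates each trial X_s (time x channels) as W_tem A_s W_spa, with shared non-negative temporal modules W_tem, shared non-negative spatial modules W_spa, and signed per-trial coefficients A_s. This is a simplified stand-in, not the authors' algorithm: it assumes the data have been made non-negative (e.g., time-frequency power), fits the two module sets with separate scikit-learn NMF runs, and recovers the signed coefficients by least squares, whereas the original method estimates the factors jointly.

    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    n_trials, n_time, n_chan, P, N = 100, 50, 32, 3, 2
    X = rng.random((n_trials, n_time, n_chan))   # stand-in for non-negative M/EEG data

    # Shared temporal modules: factorize a (time x trials*channels) matrix.
    Xt = X.transpose(1, 0, 2).reshape(n_time, -1)
    W_tem = NMF(n_components=P, init="nndsvd", max_iter=500).fit_transform(Xt)    # (time, P)

    # Shared spatial modules: factorize a (channels x trials*time) matrix.
    Xs = X.transpose(2, 0, 1).reshape(n_chan, -1)
    W_spa = NMF(n_components=N, init="nndsvd", max_iter=500).fit_transform(Xs).T  # (N, chan)

    # Signed per-trial coefficients by least squares: A_s ~ W_tem^+ X_s W_spa^+.
    A = np.stack([np.linalg.pinv(W_tem) @ X[s] @ np.linalg.pinv(W_spa)
                  for s in range(n_trials)])     # (trials, P, N) single-trial features
    print(A.shape)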

    Four not six: revealing culturally common facial expressions of emotion

    As a highly social species, humans generate complex facial expressions to communicate a diverse range of emotions. Since Darwin’s work, identifying which of these complex patterns are common across cultures and which are culture-specific has remained a central question in psychology, anthropology, philosophy, and, more recently, machine vision and social robotics. Classic approaches to this question typically tested the cross-cultural recognition of theoretically motivated facial expressions representing six emotions, and reported universality. Yet variable recognition accuracy across cultures suggests a narrower cross-cultural communication, supported by sets of simpler expressive patterns embedded in more complex facial expressions. We explore this hypothesis by modelling the facial expressions of over 60 emotions across two cultures and segregating out the latent expressive patterns. Using a multi-disciplinary approach, we first map the conceptual organization of a broad spectrum of emotion words by building semantic networks in two cultures. For each emotion word in each culture, we then model and validate its corresponding dynamic facial expression, producing over 60 culturally valid facial expression models. We then apply a multivariate data reduction technique to the pooled models, revealing four latent and culturally common facial expression patterns, each of which communicates specific combinations of valence, arousal, and dominance. Finally, we reveal the face movements that accentuate each latent expressive pattern to create complex facial expressions. Our data question the widely held view that six facial expression patterns are universal, instead suggesting four latent expressive patterns with direct implications for emotion communication, social psychology, cognitive neuroscience, and social robotics.
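
    The semantic-network step can be illustrated with a toy example. In the hypothetical sketch below, emotion words are linked whenever their pairwise similarity (here a random stand-in for participant ratings) exceeds a threshold, and connected components approximate clusters of conceptually related words; the word list and the 0.6 threshold are purely illustrative.

    import itertools
    import networkx as nx
    import numpy as np

    words = ["happy", "delighted", "sad", "gloomy", "angry", "furious"]
    rng = np.random.default_rng(1)

    G = nx.Graph()
    G.add_nodes_from(words)
    for w1, w2 in itertools.combinations(words, 2):
        similarity = rng.random()        # stand-in for human similarity ratings
        if similarity > 0.6:             # illustrative linking threshold
            G.add_edge(w1, w2, weight=similarity)

    # Connected components approximate clusters of conceptually related words.
    print([sorted(component) for component in nx.connected_components(G)])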

    Frontal top-down signals increase coupling of auditory low-frequency oscillations to continuous speech in human listeners

    Humans show a remarkable ability to understand continuous speech even under adverse listening conditions. This ability critically relies on dynamically updated predictions of incoming sensory information, but exactly how top-down predictions improve speech processing is still unclear. Brain oscillations are a likely mechanism for these top-down predictions [1 and 2]. Quasi-rhythmic components in speech are known to entrain low-frequency oscillations in auditory areas [3 and 4], and this entrainment increases with intelligibility [5]. We hypothesize that top-down signals from frontal brain areas causally modulate the phase of brain oscillations in auditory cortex. We use magnetoencephalography (MEG) to monitor brain oscillations in 22 participants during continuous speech perception. We characterize prominent spectral components of speech-brain coupling in auditory cortex and use causal connectivity analysis (transfer entropy) to identify top-down signals that drive this coupling more strongly during intelligible than during unintelligible speech. We report three main findings. First, frontal and motor cortices significantly modulate the phase of speech-coupled low-frequency oscillations in auditory cortex, and this effect depends on the intelligibility of speech. Second, top-down signals are significantly stronger for left auditory cortex than for right auditory cortex. Third, speech-auditory cortex coupling is enhanced as a function of stronger top-down signals. Together, our results suggest that low-frequency brain oscillations play a role in implementing predictive top-down control during continuous speech perception and that top-down control is largely directed at left auditory cortex. This suggests a close relationship between (left-lateralized) speech production areas and the implementation of top-down control in continuous speech perception.
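
    Speech-brain coupling of this kind is often quantified as coherence between the speech amplitude envelope and the neural signal in the delta/theta range. The sketch below uses simulated signals and undirected coherence only; the directed, causal analysis in the paper (transfer entropy) is not reproduced, and all signal parameters are illustrative.

    import numpy as np
    from scipy.signal import coherence

    fs = 200.0                                  # sampling rate in Hz
    t = np.arange(0, 60, 1 / fs)                # 60 s of signal
    rng = np.random.default_rng(0)

    # Simulated 4 Hz quasi-rhythmic speech envelope and a noisily entrained signal.
    envelope = 1 + np.sin(2 * np.pi * 4 * t) + 0.5 * rng.standard_normal(t.size)
    brain = np.sin(2 * np.pi * 4 * t + 0.8) + rng.standard_normal(t.size)

    f, coh = coherence(envelope, brain, fs=fs, nperseg=int(4 * fs))
    band = (f >= 1) & (f <= 8)                  # delta/theta range
    print("peak coupling at %.2f Hz, coherence %.2f"
          % (f[band][coh[band].argmax()], coh[band].max()))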

    Facial expression aftereffect revealed by adaptation to emotion-invisible dynamic bubbled faces

    Visual adaptation is a powerful tool to probe the short-term plasticity of the visual system. Adapting to local features such as oriented lines can distort our judgment of subsequently presented lines, a phenomenon known as the tilt aftereffect. The tilt aftereffect is believed to be processed at low levels of the visual cortex, such as V1. Adaptation to faces, on the other hand, can produce significant aftereffects in high-level traits such as identity, expression, and ethnicity. However, whether face adaptation necessitates awareness of face features is debatable. In the current study, we investigated whether facial expression aftereffects (FEAE) can be generated by partially visible faces. We first generated partially visible faces using the bubbles technique, in which the face was seen through randomly positioned circular apertures, and selected the bubbled faces for which the subjects were unable to identify happy or sad expressions. When the subjects adapted to static displays of these partial faces, no significant FEAE was found. However, when the subjects adapted to a dynamic video display of a series of different partial faces, a significant FEAE was observed. In both conditions, subjects could not identify facial expression in the individual adapting faces. These results suggest that our visual system is able to integrate unrecognizable partial faces over a short period of time and that the integrated percept affects our judgment of subsequently presented faces. We conclude that FEAE can be generated by partial faces carrying few facial expression cues, implying that our cognitive system fills in the missing parts during adaptation, or that subcortical structures are activated by the bubbled faces without conscious recognition of emotion during adaptation.
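
    The Bubbles technique at the heart of the stimulus construction can be sketched in a few lines: the face is multiplied by a mask built from a handful of randomly positioned apertures so that most of it stays hidden. The Gaussian aperture profile, bubble count, and sizes below are illustrative choices, not the study's exact parameters.

    import numpy as np

    def bubble_mask(shape, n_bubbles=5, sigma=8.0, rng=None):
        """Sum of randomly positioned Gaussian apertures, clipped to [0, 1]."""
        rng = rng or np.random.default_rng()
        h, w = shape
        yy, xx = np.mgrid[0:h, 0:w]
        mask = np.zeros(shape)
        for _ in range(n_bubbles):
            cy, cx = rng.integers(0, h), rng.integers(0, w)  # random aperture center
            mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
        return np.clip(mask, 0.0, 1.0)

    face = np.random.rand(128, 128)            # stand-in for a grayscale face image
    bubbled = face * bubble_mask(face.shape)   # only the bubbled regions stay visible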

    Vision: face-centered representations in the brain

    A longstanding debate in the face recognition field concerns the format of face representations in the brain. New research clarifies part of this mystery by revealing a face-centered format in a patient with a lesion of the left splenium of the corpus callosum who perceives the right side of faces as ‘melted’.

    Reverse Engineering Psychologically Valid Facial Expressions of Emotion into Social Robots

    Social robots are now part of human society, destined for schools, hospitals, and homes to perform a variety of tasks. To engage their human users, social robots must be equipped with the essential social skill of facial expression communication. Yet even state-of-the-art social robots are limited in this ability because they often rely on a restricted set of facial expressions derived from theory, with well-known limitations such as a lack of naturalistic dynamics. With no agreed methodology to objectively engineer a broader range of more psychologically impactful facial expressions into social robots' repertoires, human-robot interactions remain restricted. Here, we address this challenge with new methodologies that can reverse-engineer dynamic facial expressions into a social robot head. Our data-driven, user-centered approach, which combines human perception with psychophysical methods, produced highly recognizable and human-like dynamic facial expressions of the six classic emotions that generally outperformed state-of-the-art social robot facial expressions. Our data demonstrate the feasibility of our method applied to social robotics and highlight the benefits of a data-driven approach that places human users at the center of deriving facial expressions for social robots. We also discuss future work to reverse-engineer a wider range of socially relevant facial expressions, including conversational messages (e.g., interest, confusion) and personality traits (e.g., trustworthiness, attractiveness). Together, our results highlight the key role that psychology must continue to play in the design of social robots.
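
    One step of such a pipeline, transferring a validated dynamic action-unit (AU) trajectory onto a robot head, might look like the hypothetical sketch below: the AU time course is resampled to the actuator update rate and mapped linearly onto a servo's command range. The servo name, pulse-width range, and frame rates are invented for illustration; the paper does not specify this interface.

    import numpy as np

    # A validated AU12 (lip-corner puller) trajectory: onset then partial release.
    au12_traj = np.concatenate([np.linspace(0.0, 1.0, 30), np.linspace(1.0, 0.6, 20)])
    model_fps, robot_hz = 50, 20

    # Resample the 50 fps model trajectory to the robot's 20 Hz control loop.
    t_model = np.arange(au12_traj.size) / model_fps
    t_robot = np.arange(0.0, t_model[-1], 1.0 / robot_hz)
    traj = np.interp(t_robot, t_model, au12_traj)

    # Map AU amplitude [0, 1] linearly onto an (invented) servo pulse-width range.
    SERVO_RANGE_US = {"lip_corner_puller": (1500, 1900)}
    lo, hi = SERVO_RANGE_US["lip_corner_puller"]
    commands = (lo + traj * (hi - lo)).astype(int)   # one command per control tick
    print(commands[:5], "...", commands[-5:])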

    Interattribute Distances do not Represent the Identity of Real World Faces

    According to an influential view, based on studies of development and of the face inversion effect, human face recognition relies mainly on the processing of the distances among internal facial features. However, there is surprisingly little evidence supporting this claim. Here, we first use a sample of 515 face photographs to estimate the face recognition information available in interattribute distances. We demonstrate that previous studies of interattribute distances generated faces that exaggerated this information by 376% compared with real-world faces. When human observers are required to recognize faces solely on the basis of real-world interattribute distances, they perform poorly across a broad range of viewing distances (equivalent to 2 m to more than 16 m in the real world). In contrast, recognition is almost perfect when observers recognize faces on the basis of real-world information other than interattribute distances, such as attribute shapes and skin properties. We conclude that facial cues other than interattribute distances, such as attribute shapes and skin properties, are the dominant information used by face recognition mechanisms.
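
    For concreteness, interattribute distances are simply the pairwise distances between facial landmarks, typically normalized so faces can be compared across scale. A minimal sketch follows, with made-up landmark coordinates and interocular-distance normalization as an assumed convention.

    import itertools
    import numpy as np

    # Made-up landmark coordinates (x, y) in pixels for one face.
    landmarks = {
        "left_eye":  np.array([38.0, 52.0]),
        "right_eye": np.array([62.0, 52.0]),
        "nose":      np.array([50.0, 66.0]),
        "mouth":     np.array([50.0, 82.0]),
    }

    # Normalize by interocular distance so faces are comparable across scale.
    iod = np.linalg.norm(landmarks["right_eye"] - landmarks["left_eye"])
    for a, b in itertools.combinations(landmarks, 2):
        d = np.linalg.norm(landmarks[a] - landmarks[b]) / iod
        print(f"{a} - {b}: {d:.3f}")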